
    Parallel architectures for entropy coding in a dual-standard ultra-HD video encoder

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Includes bibliographical references (p. 97-98).
    The mismatch between the rapid increase in resolution requirements and the slower increase in energy capacity demands more aggressive low-power circuit design techniques to maintain the battery life of hand-held multimedia devices. As the operating voltage is lowered to reduce power consumption, the maximum operating frequency of the system must also decrease, while the performance requirements remain constant. To meet the performance constraints imposed by the high resolution and complex functionality of video processing systems, novel techniques for increasing throughput are explored. In particular, the entropy coding functional block faces the most stringent throughput requirements due to its highly serial nature, especially for sustaining real-time encoding. This thesis proposes parallel architectures for high-performance entropy coding for high-resolution, dual-standard video encoding. To demonstrate the most aggressive techniques for achieving standard reconfigurability, two markedly different video compression standards (H.264/AVC and VC-1) are supported. Specifically, the entropy coder must process data generated from quad full-HD video (4096x2160 pixels per frame, the equivalent of four full-HD frames) at a frame rate of 30 frames per second and perform lossless compression to generate an output bitstream. This block will be integrated into a dual-standard video encoder chip targeted for operation at 0.6 V, to be fabricated following the completion of this thesis. Parallelism, along with other techniques applied at the syntax-element or bit level, is used to achieve the overall throughput requirements.
    Three frames of video data are processed in parallel at the system level, and varying degrees of parallelism are employed within the entropy coding block for each standard. The VC-1 entropy encoder block encodes 735M symbols per second with a gate count of 136.6K and a power consumption of 304.5 pW, and the H.264 block encodes 4.97G binary symbols per second through three-frame parallelism and a 6-bin cascaded pipelining architecture with a critical path delay of 20.05 ns.
    by Bonnie K. Y. Lam. S.M.
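As a quick arithmetic check, the raw pixel rate implied by the quad full-HD specification quoted in the abstract (4096x2160 pixels per frame at 30 frames per second) can be computed directly; this is only a back-of-the-envelope sanity check, not a figure taken from the thesis itself:

```python
# Pixel rate implied by the quad full-HD spec stated in the abstract.
WIDTH, HEIGHT, FPS = 4096, 2160, 30

pixels_per_second = WIDTH * HEIGHT * FPS
print(f"pixel rate: {pixels_per_second / 1e6:.1f} Mpixels/s")  # ~265.4 Mpixels/s
```

Every one of these roughly 265 million pixels per second must pass through the serial entropy coder, which is why the abstract emphasizes parallelism at the frame, syntax-element, and bit levels.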

    Longitudinal grey and white matter changes in frontotemporal dementia and Alzheimer's disease

    Behavioural variant frontotemporal dementia (bvFTD) and Alzheimer's disease (AD) dementia are characterised by progressive brain atrophy. Longitudinal MRI volumetry may help to characterise ongoing structural degeneration and support the differential diagnosis of dementia subtypes. Automated, observer-independent atlas-based MRI volumetry was applied to analyse 102 MRI data sets from 15 bvFTD, 14 AD, and 10 healthy elderly control participants with consecutive scans over at least 12 months. Anatomically defined targets were chosen a priori as brain structures of interest. Groups were compared regarding volumes at clinic presentation and annual change rates. Baseline volumes, especially of grey matter compartments, were significantly reduced in bvFTD and AD patients. Grey matter volumes of the caudate and the gyrus rectus were significantly smaller in bvFTD than in AD. The bvFTD group could be separated from AD on the basis of caudate volume with high accuracy (79% of cases correct). Annual volume decline was markedly larger in bvFTD and AD than in controls, predominantly in the white matter of temporal structures. Decline in grey matter volume of the lateral orbitofrontal gyrus separated bvFTD from AD and controls. Automated longitudinal MRI volumetry discriminates bvFTD from AD. In particular, greater reduction of orbitofrontal grey matter and temporal white matter structures after 12 months is indicative of bvFTD.

    Grey and white matter correlates of recent and remote autobiographical memory retrieval: Insights from the dementias

    The capacity to remember self-referential past events relies on the integrity of a distributed neural network. Controversy exists, however, regarding the involvement of specific brain structures in the retrieval of recently experienced versus more distant events. Here, we explored how characteristic patterns of atrophy in neurodegenerative disorders differentially disrupt remote versus recent autobiographical memory. Eleven behavioural-variant frontotemporal dementia, 10 semantic dementia, and 15 Alzheimer's disease patients and 14 healthy older Controls completed the Autobiographical Interview. All patient groups displayed significant remote memory impairments relative to Controls. Similarly, recent period retrieval was significantly compromised in behavioural-variant frontotemporal dementia and Alzheimer's disease, yet semantic dementia patients scored in line with Controls. Voxel-based morphometry and diffusion tensor imaging analyses, for all participants combined, were conducted to investigate grey and white matter correlates of remote and recent autobiographical memory retrieval. Neural correlates common to both recent and remote time periods were identified, including the hippocampus, medial prefrontal, and frontopolar cortices, and the forceps minor and left hippocampal portion of the cingulum bundle. Regions exclusively implicated in each time period were also identified. The integrity of the anterior temporal cortices was related to the retrieval of remote memories, whereas the posterior cingulate cortex emerged as a structure significantly associated with recent autobiographical memory retrieval. This study represents the first investigation of the grey and white matter correlates of remote and recent autobiographical memory retrieval in neurodegenerative disorders. Our findings demonstrate the importance of core brain structures, including the medial prefrontal cortex and hippocampus, irrespective of time period, and point towards the contribution of discrete regions in mediating successful retrieval of distant versus recently experienced events.

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
    Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
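The flux-versus-accumulation distinction the abstract stresses is commonly operationalized by comparing a marker level (e.g., LC3-II) with and without a lysosomal inhibitor: marker that accumulates only when degradation is blocked reflects cargo that was being turned over. The sketch below is a simplified illustration of that logic with hypothetical normalized values, not an assay protocol from the guidelines:

```python
def autophagic_flux(marker_with_inhibitor, marker_without_inhibitor):
    """Estimate flux as the extra marker accumulated when lysosomal
    degradation is blocked.

    A larger difference suggests more cargo was being delivered to and
    degraded in lysosomes; a steady-state marker level alone cannot
    distinguish increased synthesis from blocked clearance.
    """
    return marker_with_inhibitor - marker_without_inhibitor

# Hypothetical normalized LC3-II band intensities:
steady_state = 1.4   # untreated cells
blocked = 3.1        # same cells with a lysosomal inhibitor
flux = autophagic_flux(blocked, steady_state)
```

Under this readout, two conditions with identical steady-state marker levels can have very different flux, which is exactly the confusion the guidelines warn against.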

    The genetic architecture of the human cerebral cortex

    The cerebral cortex underlies our complex cognitive capabilities, yet little is known about the specific genetic loci that influence human cortical structure. To identify genetic variants that affect cortical structure, we conducted a genome-wide association meta-analysis of brain magnetic resonance imaging data from 51,665 individuals. We analyzed the surface area and average thickness of the whole cortex and 34 regions with known functional specializations. We identified 199 significant loci and found significant enrichment for loci influencing total surface area within regulatory elements that are active during prenatal cortical development, supporting the radial unit hypothesis. Loci that affect regional surface area cluster near genes in Wnt signaling pathways, which influence progenitor expansion and areal identity. Variation in cortical structure is genetically correlated with cognitive function, Parkinson's disease, insomnia, depression, neuroticism, and attention deficit hyperactivity disorder.

    31st Annual Meeting and Associated Programs of the Society for Immunotherapy of Cancer (SITC 2016): part two

    Background: The immunological escape of tumors represents one of the main obstacles to the treatment of malignancies. The blockade of PD-1 or CTLA-4 receptors represented a milestone in the history of immunotherapy. However, immune checkpoint inhibitors seem to be effective in specific cohorts of patients. It has been proposed that their efficacy relies on the presence of an immunological response. Thus, we hypothesized that disruption of the PD-L1/PD-1 axis would synergize with our oncolytic vaccine platform PeptiCRAd.
    Methods: We used murine B16OVA in vivo tumor models and flow cytometry analysis to investigate the immunological background.
    Results: First, we found that high-burden B16OVA tumors were refractory to combination immunotherapy. However, with a more aggressive schedule, tumors with a lower burden were more susceptible to the combination of PeptiCRAd and PD-L1 blockade. The therapy significantly increased the median survival of mice (Fig. 7). Interestingly, the reduced growth of contralaterally injected B16F10 cells suggested the presence of a long-lasting immunological memory also against non-targeted antigens. Concerning the functional state of tumor-infiltrating lymphocytes (TILs), we found that all the immune therapies enhanced the percentage of activated (PD-1pos TIM-3neg) T lymphocytes and reduced the amount of exhausted (PD-1pos TIM-3pos) cells compared to placebo. As expected, we found that PeptiCRAd monotherapy could increase the number of antigen-specific CD8+ T cells compared to other treatments. However, only the combination with PD-L1 blockade could significantly increase the ratio between activated and exhausted pentamer-positive cells (p = 0.0058), suggesting that by disrupting the PD-1/PD-L1 axis we could decrease the amount of dysfunctional antigen-specific T cells. We observed that the anatomical location deeply influenced the state of CD4+ and CD8+ T lymphocytes. In fact, TIM-3 expression was increased 2-fold on TILs compared to splenic and lymphoid T cells. In the CD8+ compartment, the expression of PD-1 on the surface seemed to be restricted to the tumor microenvironment, while CD4+ T cells had high expression of PD-1 also in lymphoid organs. Interestingly, we found that the levels of PD-1 were significantly higher on CD8+ T cells than on CD4+ T cells in the tumor microenvironment (p < 0.0001).
    Conclusions: We demonstrated that the efficacy of immune checkpoint inhibitors might be strongly enhanced by their combination with cancer vaccines. PeptiCRAd was able to increase the number of antigen-specific T cells, and PD-L1 blockade prevented their exhaustion, resulting in long-lasting immunological memory and increased median survival.
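The activated-to-exhausted ratio reported in the results is a simple quotient of gated cell counts from flow cytometry. The function below illustrates the calculation; the counts are hypothetical, not the study's data:

```python
def activation_ratio(activated_count, exhausted_count):
    """Ratio of activated (PD-1pos TIM-3neg) to exhausted
    (PD-1pos TIM-3pos) T cells from gated flow-cytometry counts."""
    return activated_count / exhausted_count

# Hypothetical pentamer-positive cell counts:
ratio = activation_ratio(420, 120)
```

A higher ratio after PD-L1 blockade is what the abstract interprets as reduced exhaustion of antigen-specific T cells.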

    Energy scalable systems for 2D and 3D low-power ultrasound beamforming

    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. Cataloged from PDF version of thesis. Includes bibliographical references (pages 119-125).
    In traditional ultrasound imaging systems, bulky and power-intensive mainframes are used to process the high number of waveforms acquired in parallel from a large transducer array. The computational power of these systems scales linearly with transducer count. However, there exist applications where basic functionality in low-power conditions may be preferable to an "all-or-nothing" system that only produces a high-resolution image when enough power is supplied. This thesis presents systems designed to support energy scalability at run-time, enabling the user to trade off power against performance. First, a system-level energy model for a receive-side digital beamforming system is presented. Power-performance tradeoffs for the analog front-end, analog-to-digital converter, and digital beamformer are analyzed individually and then combined to account for the performance dependency between the functional components. These considerations inform a recommendation on design choices for the end-to-end system. Second, this thesis describes an energy-scalable 2-D beamformer that provides a user-controlled run-time tradeoff between image quality and energy consumption. Architectural design choices that enable three operating modes are discussed. A test chip was fabricated in 65-nm low-power CMOS technology. It can operate with functional correctness at 0.49 V, with a measured power of 185 µW in real-time operation at 0.52 V. Finally, a software-based energy-scalable 3-D ultrasound beamformer is implemented on an embedded supercomputer. The energy consumption and corresponding imaging quality are measured and compared.
    by Bonnie Kit Ying Lam. Ph. D.
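The core operation of the receive-side digital beamformer described above is delay-and-sum: each channel's waveform is delayed so that echoes from a chosen focal point align across the array, then the channels are summed. The sketch below is a minimal software illustration of that operation (integer-sample delays, one focal point), not the thesis's hardware architecture:

```python
import numpy as np

def delay_and_sum(rf_data, delays_samples, apodization=None):
    """Minimal receive-side delay-and-sum beamformer for one focal point.

    rf_data        : (channels, samples) array of received waveforms
    delays_samples : per-channel integer delay (in samples) that aligns
                     the focal-point echo across the array
    apodization    : optional per-channel weights (default: uniform)
    """
    n_ch, n_samp = rf_data.shape
    if apodization is None:
        apodization = np.ones(n_ch)
    out = np.zeros(n_samp)
    for ch in range(n_ch):
        d = delays_samples[ch]
        # Shift each channel so the focal-point echoes line up, then sum.
        out[d:] += apodization[ch] * rf_data[ch, :n_samp - d]
    return out
```

Because the work per focal point grows with channel count, dropping channels or coarsening delays is one natural knob for the kind of run-time power-versus-quality tradeoff the thesis targets.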